
    Euclidean Distance Matrices: Essential Theory, Algorithms and Applications

    Euclidean distance matrices (EDM) are matrices of squared distances between points. The definition is deceptively simple: thanks to their many useful properties, they have found applications in psychometrics, crystallography, machine learning, wireless sensor networks, acoustics, and more. Despite their usefulness, EDMs seem to be insufficiently known in the signal processing community. Our goal is to rectify this in a concise tutorial. We review the fundamental properties of EDMs, such as rank and (non)definiteness. We show how various EDM properties can be used to design algorithms for completing and denoising distance data. Along the way, we demonstrate applications to microphone position calibration, ultrasound tomography, room reconstruction from echoes, and phase retrieval. By spelling out the essential algorithms, we hope to fast-track readers in applying EDMs to their own problems. Matlab code for all described algorithms and for generating the figures in the paper is available online. Finally, we suggest directions for further research.
    Comment: 17 pages, 12 figures; to appear in IEEE Signal Processing Magazine; change of title in the last revision
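As a quick illustration of the rank property mentioned above (a toy numpy sketch of our own, not the authors' published Matlab code): an EDM built from n points in d dimensions has rank at most d + 2, no matter how large n is.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 10                      # ambient dimension, number of points
X = rng.standard_normal((d, n))   # points as columns

# Squared-distance matrix: D_ij = ||x_i||^2 + ||x_j||^2 - 2 x_i^T x_j
G = X.T @ X                       # Gram matrix
g = np.diag(G)
D = g[:, None] + g[None, :] - 2 * G

# Classical EDM rank property: rank(D) <= d + 2, regardless of n
print(np.linalg.matrix_rank(D))   # 5 for generic points in d = 3
```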

    Phase Retrieval for Sparse Signals: Uniqueness Conditions

    In a variety of fields, in particular those involving imaging and optics, we often measure signals whose phase is missing or has been irremediably distorted. Phase retrieval attempts to recover the phase information of a signal from the magnitude of its Fourier transform, to enable the reconstruction of the original signal. A fundamental question then is: "Under which conditions can we uniquely recover the signal of interest from its measured magnitudes?" In this paper, we assume the measured signal to be sparse. This is a natural assumption in many applications, such as X-ray crystallography, speckle imaging, and blind channel estimation. We derive a sufficient condition for the uniqueness of the solution of the phase retrieval (PR) problem in both discrete and continuous domains, and for one-dimensional and multidimensional domains. More precisely, we show that there is a strong connection between PR and the turnpike problem, a classic combinatorial problem. We also prove that the existence of collisions in the autocorrelation function of the signal may preclude the uniqueness of the solution of PR. Then, assuming the absence of collisions, we prove that the solution is almost surely unique on one-dimensional domains. Finally, we extend this result to multidimensional signals by solving a set of one-dimensional problems. We show that the solution of the multidimensional problem is unique when the autocorrelation function has no collisions, significantly improving upon a previously known result.
    Comment: submitted to IEEE TI
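To make the notion of a collision concrete, here is a small sketch of our own (not code from the paper): the support of the autocorrelation of a sparse signal is the multiset of pairwise differences of the signal's support locations, and a collision occurs when two distinct pairs of locations produce the same difference.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
support = np.sort(rng.choice(1000, size=8, replace=False))  # sparse support

# Multiset of positive pairwise differences = support of the autocorrelation
diffs = [b - a for i, a in enumerate(support) for b in support[i + 1:]]

# A collision is a difference produced by more than one pair of locations;
# per the paper, collisions are what may break uniqueness of the PR solution.
counts = Counter(diffs)
collisions = {d: c for d, c in counts.items() if c > 1}
print("collision-free" if not collisions else f"collisions: {collisions}")
```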

    Sensing the real world: inverse problems, sparsity and sensor placement

    A sensor is a device that detects or measures a physical property and records, indicates, or otherwise responds to it. In other words, a sensor allows us to interact with the surrounding environment by measuring a given phenomenon qualitatively or quantitatively. Biological evolution provided every living entity with a set of sensors to ease survival in the face of daily challenges. In addition to these biological sensors, humans have developed and designed "artificial" sensors with the aim of improving our capacity to sense the real world. Today, thanks to technological developments, sensors are ubiquitous, and we therefore measure an exponentially growing amount of data. Here is the challenge: how do we process and use this data?

    Nowadays, it is common to design real-world sensing architectures that use the measured data to estimate certain parameters of the measured physical field. Problems of this type are known in mathematics as inverse problems, and finding their solution is challenging: we estimate a set of parameters of a physical field with possibly infinite degrees of freedom from only a few measurements, which are most likely corrupted by noise. Therefore, we would like to design algorithms that solve the given inverse problem while ensuring the existence of the solution, its uniqueness, and its robustness to measurement noise.

    In this thesis, we tackle different inverse problems, all inspired by real-world applications. First, we propose a new regularization technique for linear inverse problems based on optimizing the placement of the sensor network collecting the data. We propose FrameSense, a greedy algorithm inspired by frame theory that finds, in polynomial time, a near-optimal sensor placement with respect to the reconstruction error of the inverse problem solution (a toy sketch of the greedy selection idea appears below). We substantiate our theoretical findings with numerical simulations showing that our method improves on the state of the art. In particular, we show significant improvements in two real-world applications: the thermal monitoring of many-core processors and the adaptive sampling scheduling of environmental sensor networks.

    Second, we introduce the dual of the sensor placement problem, namely the source placement problem. In this case, instead of regularizing the inverse problem, we enable precise control of the physical field by means of a forward problem. For this problem, we propose a near-optimal algorithm for the noiseless case, that is, when we know exactly the current state of the physical field.

    Third, we consider a family of physical phenomena that can be modeled by means of graphs, where the nodes represent a set of entities and the edges model the transmission delay of information between the entities. Examples of such phenomena are the spreading of a virus within the population of a region or the spreading of a rumor on a social network. In this scenario, we identify two new key problems: source placement and vaccination. For the former, we would like to find a set of sources such that information spreads over the network as fast as possible. For the latter, we look for an optimal set of nodes to be "vaccinated" such that the virus spreads as slowly as possible. For both problems, we propose greedy algorithms that directly optimize the average infection time of the network. These algorithms outperform the current state of the art, and we evaluate their performance with a set of experiments on synthetic datasets.
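To give a flavor of the greedy selection behind FrameSense, here is a toy sketch of our own, under the assumption that selection is driven by the frame potential FP(A) = sum_ij |<a_i, a_j>|^2 in a worst-out fashion; the actual algorithm and its near-optimality guarantees are developed in the thesis.

```python
import numpy as np

def frame_sense_sketch(Psi, L):
    """Greedy 'worst-out' selection in the spirit of FrameSense: starting
    from all N candidate rows of Psi, repeatedly drop the row whose removal
    gives the smallest frame potential of the remaining rows, until L rows
    (sensor locations) are left."""
    remaining = list(range(Psi.shape[0]))
    while len(remaining) > L:
        A = Psi[remaining]
        C2 = np.abs(A @ A.T) ** 2     # squared pairwise inner products
        fp_total = C2.sum()
        # FP after removing row k: subtract its row and column of C2
        # (the diagonal term C2[k, k] was counted twice).
        fp_without = fp_total - 2 * C2.sum(axis=1) + np.diag(C2)
        remaining.pop(int(np.argmin(fp_without)))
    return remaining

# Toy usage: pick 5 of 20 candidate locations for a 5-parameter field model
rng = np.random.default_rng(2)
Psi = rng.standard_normal((20, 5))
print(frame_sense_sketch(Psi, 5))
```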
Then, we discuss three distinct inverse problems for physical fields governed by diffusion, such as the temperature of solid bodies or the dispersion of pollution in the atmosphere. We first study the uniform sampling and reconstruction of diffusion fields, and we show that we can exploit the kernel of the field to control and bound the aliasing error. Second, we study the estimation of the sources of a diffusive field given a set of spatio-temporal measurements of the field, under the assumption that the sources can be modeled as a set of Dirac deltas. For this estimation problem, we propose an algorithm that exploits the eigenfunction representation of the diffusion field, and we show that this algorithm recovers the sources precisely (a minimal sketch of the amplitude-recovery step appears after this abstract). Third, we propose an algorithm for estimating the time-varying emissions of smokestacks from data collected in the surrounding environment by a sensor network, under the assumption that the emission rates can be modeled as signals lying on low-dimensional subspaces or having a finite rate of innovation.

Last, we analyze a classic non-linear inverse problem, namely sparse phase retrieval. In this problem, we would like to estimate a signal from just the magnitude of its Fourier transform. Phase retrieval is of interest for many scientific applications, such as X-ray crystallography and astronomy. We assume that the signal of interest is spatially sparse, as is the case in many applications, and we model it as a linear combination of Dirac deltas. We derive sufficient conditions for the uniqueness of the solution based on the support of the autocorrelation function of the measured sparse signal. Finally, we propose a reconstruction algorithm for sparse phase retrieval that takes advantage of the sparsity of the signal of interest.
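As flagged above, here is a minimal sketch of the linear part of diffusion source estimation, under simplifying assumptions of our own: a 1-D field, instantaneous releases at t = 0, and source locations already known, so that only the amplitudes remain to be estimated by least squares. The thesis itself also recovers the locations, via the eigenfunction representation of the field.

```python
import numpy as np

# 1-D diffusion Green's function for an instantaneous point source at xi
def green(x, t, xi, D=0.1):
    return np.exp(-(x - xi) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

rng = np.random.default_rng(3)
sensors = np.linspace(0.0, 1.0, 8)          # assumed sensor positions
times = np.array([0.05, 0.1, 0.2, 0.4])     # measurement instants
src_pos, src_amp = np.array([0.3, 0.7]), np.array([1.0, 2.5])

# Simulate noisy spatio-temporal measurements of the diffusive field
A = np.stack([green(x, t, src_pos) for x in sensors for t in times])
y = A @ src_amp + 0.01 * rng.standard_normal(A.shape[0])

# With known source locations, amplitude recovery is linear least squares
amp_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(amp_hat)   # close to [1.0, 2.5]
```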

    Near-Optimal Source Placement for Linear Physical Fields

    In real-world applications, signal processing is often used to measure and control a physical field by means of sensors and sources, respectively. An aspect that has often been neglected is the optimization of the sources' locations. In this work, we discuss the source placement problem as the dual of the sensor placement problem and propose two polynomial-time algorithms, for scenarios with or without noise. Both algorithms are near-optimal and indicate that the control of such physical fields can be made easier, more efficient, and more robust to noise.

    Relax and Unfold: Microphone Localization with Euclidean Distance Matrices

    Recent methods for localization of microphones in a microphone array exploit sound sources at a priori unknown locations. This is convenient for ad-hoc arrays, as it requires little additional infrastructure. We propose a flexible localization algorithm: we first recognize the problem as an instance of multidimensional unfolding (MDU), a classical problem in Euclidean geometry and psychometrics, and then solve the MDU as a special case of Euclidean distance matrix (EDM) completion. We solve the EDM completion using a semidefinite relaxation. In contrast to existing methods, the semidefinite formulation allows us not only to elegantly handle missing pairwise distance information, but also to incorporate various prior information about the distances between pairs of microphones or sources, bounds on these distances, or ordinal information such as "microphones 1 and 2 are farther apart than microphones 1 and 15". The intuition that this should improve the localization performance is confirmed by numerical experiments.
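A minimal sketch of the semidefinite-relaxation idea in cvxpy (one common formulation of EDM completion, not necessarily the paper's exact program): optimize over positive semidefinite Gram matrices G whose induced EDM, edm(G)_ij = G_ii + G_jj - 2 G_ij, matches the observed squared distances, then embed by eigendecomposition.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
n, d = 8, 2
X_true = rng.standard_normal((n, d))                    # true positions
D = ((X_true[:, None, :] - X_true[None, :, :]) ** 2).sum(-1)
W = np.triu((rng.random((n, n)) < 0.6).astype(float), 1)
W = W + W.T                                             # observed-entry mask

# Semidefinite relaxation: PSD Gram matrix whose induced EDM fits the data
G = cp.Variable((n, n), PSD=True)
dg = cp.reshape(cp.diag(G), (n, 1))
ones = np.ones((n, 1))
EDM = dg @ ones.T + ones @ dg.T - 2 * G
prob = cp.Problem(cp.Minimize(cp.sum_squares(cp.multiply(W, EDM - D))),
                  [G @ np.ones(n) == 0])                # center the points
prob.solve()

# Embed: the top-d eigenpairs of the Gram matrix give the coordinates
w, V = np.linalg.eigh(G.value)
X_hat = V[:, -d:] * np.sqrt(np.maximum(w[-d:], 0.0))
```

The embedding is recovered only up to a rigid motion, which is inherent to distance-only measurements.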

    Assessing printability of a very-large-scale integration design

    Printability of a very-large-scale integration design is assessed in two phases. During a training phase: generating a training set of very-large-scale integration design shapes representative of a population of very-large-scale integration design shapes; obtaining a set of mathematical representations of the respective shapes in the training set; identifying at least two classes of physical events causally linked to the printability of the design shapes, each class being associated with a respective level of printability; labeling each mathematical representation of the set according to one of the identified classes, based on a lithography model; and selecting a probabilistic model function maximizing the probability of a class, given the set of mathematical representations. During a testing phase: providing a very-large-scale integration design shape to be tested, testing the provided shape, and labeling it according to the identified class.
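Purely as an illustration of the train-then-test flow described above, and with everything hypothetical (random stand-in features, labels from a made-up rule instead of a lithography model, and logistic regression as just one possible "probabilistic model function"):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Hypothetical stand-ins: feature vectors for design shapes and printability
# labels that a lithography model would normally provide.
X_train = rng.standard_normal((500, 16))                 # shape representations
y_train = (X_train[:, :4].sum(axis=1) > 0).astype(int)   # 0: prints, 1: hotspot

# Training phase: fit a probabilistic classifier maximizing class probability
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Testing phase: label a provided shape according to the identified class
x_test = rng.standard_normal((1, 16))
print(clf.predict(x_test), clf.predict_proba(x_test))
```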

    DASS: Distributed Adaptive Sparse Sensing


    Near-optimal thermal monitoring framework for many-core systems on chip

    Chip designers place on-chip thermal sensors to measure local temperatures, thus preventing thermal runaway situations in many-core processing architectures. However, the quality of the thermal reconstruction depends directly on the number of placed sensors, which should be minimized while guaranteeing full detection of all worst-case temperature gradients. In this paper, we present an entire framework for the thermal management of complex many-core architectures that precisely recovers the thermal distribution from a minimal number of sensors. The proposed sensor placement algorithm is guaranteed to reduce the impact of noisy measurements on the reconstructed thermal distribution. We achieve significant improvements compared to the state of the art, in terms of both computational complexity and reconstruction precision. For example, for a 64-core SoC with 64 noisy sensors (σ² = 4), we achieve an average reconstruction error of 1.5°C, less than half of what previous state-of-the-art methods achieve. We also study the practical limits of the proposed method and show that we do not need realistic workloads to learn the model and efficiently place the sensors. In fact, we show that the reconstruction error does not increase significantly if we randomly generate the power traces of the components or if we have only part of the correct workload.
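A toy sketch of the reconstruction step (our simplification, not the paper's framework): assume the thermal map lies in a known low-dimensional subspace (in practice learned from power traces), read a few noisy sensors with σ² = 4 as in the abstract, and recover the full map by least squares.

```python
import numpy as np

rng = np.random.default_rng(6)
n_cells, k = 256, 8           # thermal-map size, assumed subspace dimension

# Hypothetical k-dimensional subspace for the thermal map (deviations from
# ambient); a random orthonormal basis stands in for a learned model.
basis = np.linalg.qr(rng.standard_normal((n_cells, k)))[0]
T_true = basis @ (10.0 * rng.standard_normal(k))

# Read a few noisy on-chip sensors (noise variance sigma^2 = 4)
sensors = rng.choice(n_cells, size=16, replace=False)
y = T_true[sensors] + rng.normal(0.0, 2.0, size=sensors.size)

# Least-squares reconstruction of the full map from the sensor subset
coeff, *_ = np.linalg.lstsq(basis[sensors], y, rcond=None)
T_hat = basis @ coeff
print(np.abs(T_hat - T_true).mean())   # average reconstruction error, °C
```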

    Super Resolution Phase Retrieval for Sparse Signals

    In a variety of fields, in particular those involving imaging and optics, we often measure signals whose phase is missing or has been irremediably distorted. Phase retrieval attempts to recover the phase information of a signal from the magnitude of its Fourier transform, to enable the reconstruction of the original signal. Solving the phase retrieval problem is equivalent to recovering a signal from its autocorrelation function. In this paper, we assume the original signal to be sparse; this is a natural assumption in many applications, such as X-ray crystallography, speckle imaging, and blind channel estimation. We propose an algorithm that solves the phase retrieval problem in three stages: (i) we leverage finite-rate-of-innovation sampling theory to super-resolve the autocorrelation function from a limited number of samples; (ii) we design a greedy algorithm that identifies the locations of a sparse solution given the super-resolved autocorrelation function; (iii) we recover the amplitudes of the atoms given their locations and the measured autocorrelation function. Unlike traditional approaches that recover a discrete approximation of the underlying signal, our algorithm estimates the signal on a continuous domain, which makes it the first of its kind. Along with the algorithm, we derive its performance bound with a theoretical analysis and propose a set of enhancements to improve its computational complexity and noise resilience. Finally, we demonstrate the benefits of the proposed method via a comparison against Charge Flipping, a notable algorithm in crystallography.
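Stage (ii) is closely related to the classic turnpike problem: recovering point locations from the multiset of their pairwise differences. A compact backtracking sketch of that combinatorial core (a stand-in illustration, not the paper's greedy algorithm, and assuming noiseless integer differences):

```python
from collections import Counter

def turnpike(diff_list):
    """Backtracking reconstruction of point locations from the multiset of
    their positive pairwise differences (the turnpike problem)."""
    diffs = Counter(diff_list)
    width = max(diffs)
    diffs[width] -= 1                      # largest difference fixes the span
    points = [0, width]

    def place():
        if sum(diffs.values()) == 0:
            return sorted(points)
        cand = max(d for d, c in diffs.items() if c > 0)
        for p in (cand, width - cand):     # try both mirror placements
            used = Counter(abs(p - q) for q in points)
            if all(diffs[d] >= c for d, c in used.items()):
                points.append(p)
                diffs.subtract(used)
                sol = place()
                if sol:
                    return sol
                points.pop()               # backtrack
                diffs.update(used)
        return None

    return place()

# Pairwise differences of {0, 2, 5, 9} are {2, 3, 4, 5, 7, 9}
print(turnpike([2, 3, 4, 5, 7, 9]))        # prints [0, 4, 7, 9]
```

Turnpike solutions are unique only up to reflection, which is why this run returns the mirror image of {0, 2, 5, 9}.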

    The Fukushima Inverse Problem

    Knowing what amount of radioactive material was released from Fukushima in March 2011, and at what time instants, is crucial for assessing the risk and the pollution, and for understanding the scope of the consequences. Moreover, it could be used in forward simulations to obtain accurate maps of deposition. But these data are often not publicly available. We propose to estimate the emission waveforms by solving an inverse problem. Previous approaches have relied on a detailed expert guess of how the releases appeared, and they produce a solution strongly biased by this guess: if we plant a nonexistent peak in the guess, the solution also exhibits a nonexistent peak. We propose a method that solves the Fukushima inverse problem blindly. Using atmospheric dispersion models and worldwide radioactivity measurements together with sparse regularization, the method correctly reconstructs the times of major events during the accident and gives plausible estimates of the released quantities of xenon.
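A minimal sketch of the sparse-regularization idea (with a random matrix as a stand-in for the atmospheric dispersion operator, which in the paper comes from physical models): recover a spiky emission profile from linear measurements by iterative soft thresholding (ISTA) on the LASSO objective.

```python
import numpy as np

rng = np.random.default_rng(7)
T, m = 200, 120                   # emission time bins, number of measurements

# Stand-in for the dispersion operator: each sensor reading is a linear
# mixture of past emission rates (a real model would encode transport physics).
A = rng.standard_normal((m, T)) / np.sqrt(m)
x_true = np.zeros(T)
x_true[[30, 90, 140]] = [5.0, 3.0, 8.0]   # a few major release events
y = A @ x_true + 0.05 * rng.standard_normal(m)

# ISTA for min_x ||Ax - y||^2 / 2 + lam * ||x||_1
lam, step = 0.05, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(T)
for _ in range(500):
    z = x - step * A.T @ (A @ x - y)                        # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft threshold

print(np.flatnonzero(x > 0.5))    # recovered event times, ~ [30, 90, 140]
```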